Recently, there has been increasing interest in synthesizing data to improve downstream text-to-SQL tasks. In this paper, we first examined the existing synthesized datasets and discovered that state-of-the-art text-to-SQL algorithms did not further improve on popular benchmarks when trained with augmented synthetic data. We observed two shortcomings: illogical synthetic SQL queries from independent column sampling and arbitrary table joins. To address these issues, we propose a novel synthesis framework that incorporates key relationships from schema, imposes strong typing, and conducts schema-distance-weighted column sampling. We also adopt an intermediate representation (IR) for the SQL-to-text task to further improve the quality of the generated natural language questions. When existing powerful semantic parsers are pre-finetuned on our high-quality synthesized data, our experiments show that these models have significant accuracy boosts on popular benchmarks, including new state-of-the-art performance on Spider.
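To make the schema-distance-weighted column sampling idea concrete, here is a minimal Python sketch under my own assumptions (the hop-based decay, the function names, and the toy schema are illustrative, not the paper's implementation): columns are weighted by their foreign-key hop distance from an anchor table, so columns that would force arbitrary joins are rarely co-sampled.

```python
import random
from collections import deque

def schema_distances(fk_edges, start_table):
    """BFS over the foreign-key graph: hop count from start_table to every table."""
    dist = {start_table: 0}
    queue = deque([start_table])
    while queue:
        t = queue.popleft()
        nbrs = [b for a, b in fk_edges if a == t] + [a for a, b in fk_edges if b == t]
        for n in nbrs:
            if n not in dist:
                dist[n] = dist[t] + 1
                queue.append(n)
    return dist

def sample_column(columns, dist, decay=0.5):
    """Weight each candidate column by decay**hops, so columns in tables far
    from the anchor table are rarely co-sampled (avoiding arbitrary joins)."""
    weights = [decay ** dist.get(table, len(dist) + 1) for table, _ in columns]
    return random.choices(columns, weights=weights, k=1)[0]

# Hypothetical schema: orders -> customers -> regions via foreign keys.
fk_edges = [("orders", "customers"), ("customers", "regions")]
columns = [("orders", "amount"), ("customers", "name"), ("regions", "continent")]
dist = schema_distances(fk_edges, "orders")   # anchor the query on `orders`
print(sample_column(columns, dist))
```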
Artificial intelligence methods including deep neural networks (DNN) can provide rapid molecular classification of tumors from routine histology with accuracy that matches or exceeds human pathologists. Discerning how neural networks make their predictions remains a significant challenge, but explainability tools help provide insights into what models have learned when corresponding histologic features are poorly defined. Here, we present a method for improving explainability of DNN models using synthetic histology generated by a conditional generative adversarial network (cGAN). We show that cGANs generate high-quality synthetic histology images that can be leveraged for explaining DNN models trained to classify molecularly-subtyped tumors, exposing histologic features associated with molecular state. Fine-tuning synthetic histology through class and layer blending illustrates nuanced morphologic differences between tumor subtypes. Finally, we demonstrate the use of synthetic histology for augmenting pathologist-in-training education, showing that these intuitive visualizations can reinforce and improve understanding of histologic manifestations of tumor biology.
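As a rough illustration of class blending, the sketch below conditions a generator on a linear interpolation of two class embeddings; the tiny generator is a hypothetical stand-in, not the paper's cGAN architecture.

```python
import torch
import torch.nn as nn

class TinyCondGenerator(nn.Module):
    """Stand-in conditional generator: maps (noise, class embedding) to an image."""
    def __init__(self, z_dim=64, emb_dim=16, out_pixels=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + emb_dim, 256), nn.ReLU(),
            nn.Linear(256, out_pixels), nn.Tanh(),
        )

    def forward(self, z, cond):
        return self.net(torch.cat([z, cond], dim=-1)).view(-1, 3, 32, 32)

def blend_classes(gen, z, emb_a, emb_b, alpha):
    """Interpolate two class embeddings before conditioning the generator,
    morphing one tumor subtype's synthetic histology toward the other's."""
    return gen(z, (1 - alpha) * emb_a + alpha * emb_b)

gen = TinyCondGenerator()
z = torch.randn(1, 64)
emb = nn.Embedding(2, 16)               # two molecular subtypes
for alpha in (0.0, 0.5, 1.0):           # sweep subtype A -> B
    img = blend_classes(gen, z, emb(torch.tensor([0])), emb(torch.tensor([1])), alpha)
    print(alpha, img.shape)
```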
This paper introduces the shared task of summarizing documents in several creative domains, namely literary texts, movie scripts, and television scripts. Summarizing these creative documents requires making complex literary interpretations, as well as understanding non-trivial temporal dependencies in texts containing varied styles of plot development and narrative structure. This poses unique challenges that remain underexplored by text summarization systems. In this shared task, we introduce four sub-tasks and their corresponding datasets, focusing on summarizing books, movie scripts, primetime television scripts, and daytime soap opera scripts. We detail the process of curating these datasets for the task, as well as the metrics used for evaluating the submissions. As part of the CREATIVESUMM workshop at COLING 2022, the shared task attracted 18 submissions in total. We discuss the submissions and the baselines for each sub-task in this paper, along with directions for facilitating future work in the field.
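The abstract does not name the evaluation metrics; assuming standard summarization metrics such as ROUGE, a scoring sketch with the `rouge-score` package might look like this:

```python
# Requires: pip install rouge-score
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "The detective uncovers the conspiracy and saves the town."
candidate = "A detective exposes a conspiracy, rescuing the town."
for name, score in scorer.score(reference, candidate).items():
    print(f"{name}: P={score.precision:.2f} R={score.recall:.2f} F1={score.fmeasure:.2f}")
```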
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
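Since the models are publicly released, inference follows the standard Hugging Face `transformers` workflow. The sketch below loads the small public `bigscience/bloom-560m` checkpoint for illustration; the full 176B model is `bigscience/bloom`.

```python
# Requires: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"  # small BLOOM variant; "bigscience/bloom" is the 176B model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt the model with a natural language instruction, as in few-shot use.
inputs = tokenizer("Translate 'bonjour' to English:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```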
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
Contrastive methods have led to a recent surge in the performance of self-supervised representation learning (SSL). Recent methods such as BYOL and SimSiam purportedly distill these contrastive methods down to their essence, removing bells and whistles, including negative examples, that do not affect downstream performance. These "non-contrastive" methods work surprisingly well without using negatives, even though the global minimum lies at trivial collapse. We empirically analyze these non-contrastive methods and find that SimSiam is extraordinarily sensitive to dataset and model size. In particular, SimSiam representations undergo partial dimensional collapse if the model is too small relative to the dataset size. We propose a metric to measure the degree of this collapse and show that it can be used to predict downstream task performance without any fine-tuning or labels. We further analyze architectural design choices and their effect on downstream performance. Finally, we demonstrate that shifting to a continual learning setting acts as a regularizer and prevents collapse, and that a hybrid between continual and multi-epoch training can improve linear probe accuracy by as many as 18 percentage points using ResNet-18 on ImageNet.
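The abstract does not specify the collapse metric. One common proxy, sketched below under that assumption, is the effective rank of the embedding covariance spectrum, which falls far below the embedding dimension under partial dimensional collapse.

```python
import torch

def effective_rank(embeddings):
    """Effective rank (exponentiated entropy) of the covariance eigenvalue
    spectrum; values far below the embedding dimension indicate collapse."""
    z = embeddings - embeddings.mean(dim=0, keepdim=True)
    cov = (z.T @ z) / (z.shape[0] - 1)
    eig = torch.linalg.eigvalsh(cov).clamp(min=1e-12)
    p = eig / eig.sum()
    return torch.exp(-(p * p.log()).sum()).item()

# Toy embeddings whose trailing dimensions carry vanishing variance.
z = torch.randn(1024, 128) @ torch.diag(torch.linspace(1.0, 0.0, 128))
print(effective_rank(z))  # far below 128 -> partial dimensional collapse
```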
Recent studies on text-to-SQL semantic parsing rely either on the parser itself or on simple heuristics to understand the natural language query (NLQ). When synthesizing a SQL query, no explicit semantic information about the NLQ is available, leading to poor generalization performance. Moreover, without lexical-level fine-grained query understanding, the linking between a query and a database can only rely on fuzzy string matching, which results in suboptimal performance in real applications. In view of this, in this paper we present a general-purpose, modular neural semantic parsing framework based on token-level fine-grained query understanding. Our framework consists of three modules: a named entity recognizer (NER), a neural entity linker (NEL), and a neural semantic parser (NSP). By jointly modeling the query and the database, the NER model analyzes user intent and identifies the entities in the query. The NEL model links typed entities to schemas and cell values in the database. The parser model leverages the available semantic information and linking results, and synthesizes tree-structured SQL queries based on a dynamically generated grammar. Experiments on SQUALL, a newly released semantic parsing dataset, show that we can achieve 56.8% execution accuracy on the WikiTableQuestions (WTQ) test set, which outperforms the state-of-the-art model by 2.7%.
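To illustrate the modular NER -> NEL -> NSP flow (with toy rule-based stand-ins, not the paper's neural models; the table and helper names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Entity:
    span: str
    etype: str          # e.g. "cell_value", "column", "number"
    linked: str = ""    # schema column or cell the entity resolves to

def ner(nlq):
    """Stand-in NER module: tag capitalized tokens as candidate cell values."""
    return [Entity(tok, "cell_value") for tok in nlq.split() if tok.istitle()]

def nel(entities, schema):
    """Stand-in NEL module: link each typed entity to a schema column."""
    for e in entities:
        e.linked = schema.get(e.etype, "unknown")
    return entities

def nsp(entities):
    """Stand-in NSP module: compose SQL conditions from the linked entities."""
    conds = " AND ".join(f"{e.linked} = '{e.span}'" for e in entities)
    return f"SELECT * FROM t WHERE {conds}" if conds else "SELECT * FROM t"

schema = {"cell_value": "city"}
print(nsp(nel(ner("population of Boston"), schema)))
# SELECT * FROM t WHERE city = 'Boston'
```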
Needle-in-a-haystack problems exist across a wide range of applications, including rare disease prediction, ecological resource management, fraud detection, and material property optimization. A needle-in-a-haystack problem arises when there is an extreme imbalance of optimum conditions relative to the size of the dataset. For example, only 0.82% of the 146K total materials in the open-access Materials Project database have a negative Poisson's ratio. However, current state-of-the-art optimization algorithms are not designed to find solutions to these challenging multidimensional needle-in-a-haystack problems, resulting in slow convergence to a global optimum or pigeonholing into a local minimum. In this paper, we present a Zooming Memory-Based Initialization algorithm, entitled ZoMBI, which builds on conventional Bayesian optimization principles to quickly and efficiently optimize needle-in-a-haystack problems in both less time and fewer experiments by addressing the common convergence and pigeonholing issues. ZoMBI actively extracts knowledge from the previously best-performing evaluated experiments to iteratively zoom in the sampling search bounds towards the global optimum "needle", and then prunes the memory of low-performing historical experiments to accelerate compute times. We validate the algorithm's performance on two real-world 5-dimensional needle-in-a-haystack material property optimization datasets: the discovery of auxetic (negative Poisson's ratio) materials and the discovery of materials with a high thermoelectric figure of merit. The ZoMBI algorithm demonstrates compute time speed-ups of 400x compared to traditional Bayesian optimization, and efficiently discovers materials in under 100 experiments that are up to 3x more highly optimized than those discovered by current state-of-the-art algorithms.
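A minimal sketch of one ZoMBI-style zoom step, assuming the memory is simply the top-m evaluated points and the bounds contract around them; details such as the padding rule are illustrative guesses, not the published algorithm:

```python
import numpy as np

def zoom_step(X, y, bounds, top_m=5, pad=0.2):
    """One zoom step (sketch): keep the memory of the top-m evaluated points,
    shrink the search bounds around them, and discard the rest."""
    keep = np.argsort(y)[-top_m:]                   # indices of best evaluations
    best_X, best_y = X[keep], y[keep]
    lo, hi = best_X.min(axis=0), best_X.max(axis=0)
    margin = pad * (hi - lo + 1e-9)
    new_bounds = np.stack([np.maximum(bounds[0], lo - margin),
                           np.minimum(bounds[1], hi + margin)])
    return new_bounds, best_X, best_y               # zoomed bounds + retained memory

rng = np.random.default_rng(0)
bounds = np.stack([np.zeros(5), np.ones(5)])        # 5-dimensional unit search space
X = rng.uniform(bounds[0], bounds[1], size=(50, 5))
y = -np.linalg.norm(X - 0.7, axis=1)                # toy objective: needle near (0.7, ..., 0.7)
bounds, X_mem, y_mem = zoom_step(X, y, bounds)
print(bounds.round(2))                              # bounds contracted toward the needle
```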
Grounded Situation Recognition (GSR) aims to generate structured semantic summaries of images for "human-like" event understanding. Specifically, GSR not only detects the salient activity verb (e.g., buying), but also predicts all corresponding semantic roles (e.g., agent and goods). Inspired by object detection and image captioning tasks, existing methods typically employ a two-stage framework: 1) detect the activity verb, and then 2) predict the semantic roles based on the detected verb. Obviously, this illogical framework constitutes a huge obstacle to semantic understanding. First, pre-detecting the verb alone, without semantic roles, inevitably fails to distinguish many similar daily activities (e.g., offering and giving, buying and selling). Second, predicting semantic roles in a closed auto-regressive manner can hardly exploit the semantic relations between the verb and the roles. To this end, in this paper we propose a novel two-stage framework that focuses on utilizing these bidirectional relations within verbs and roles. In the first stage, instead of predicting the verb, we postpone the detection step and assume a pseudo label, where an intermediate representation of each corresponding semantic role is learned from the image. In the second stage, we exploit transformer layers to unearth the potential semantic relations within both verbs and semantic roles. With the help of a set of support images, an alternate learning scheme is designed to simultaneously optimize the results: update the verb using the nouns corresponding to the image, and update the nouns using the verbs from the support images. Extensive experimental results on the challenging SWiG benchmark show that our renovated framework outperforms other state-of-the-art methods under various metrics.
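As a rough illustration of the joint verb-role encoding (not the paper's architecture), a single transformer layer over one verb token and several role tokens lets each representation attend to and refine the other:

```python
import torch
import torch.nn as nn

# Sketch: jointly encode a verb token with role tokens so that verb and
# semantic-role predictions can inform each other bidirectionally.
d = 64
layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
verb_tok = torch.randn(1, 1, d)                   # placeholder verb representation
role_toks = torch.randn(1, 6, d)                  # placeholder semantic-role representations
mixed = layer(torch.cat([verb_tok, role_toks], dim=1))
verb_out, role_out = mixed[:, 0], mixed[:, 1:]    # both refined jointly
print(verb_out.shape, role_out.shape)
```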
We present Chirpy Cardinal, an open-domain social chatbot. Aiming to be both informative and conversational, our bot chats with users in an authentic, emotionally intelligent way. By integrating controlled neural generation with scaffolded, hand-written dialogue, we let both the user and the bot take turns driving the conversation, producing an engaging and fluent experience. Deployed in the fourth iteration of the Alexa Prize Socialbot Grand Challenge, Chirpy Cardinal handled thousands of conversations per day, ranking second out of nine bots with an average user rating of 3.58/5.
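A toy sketch of the hybrid control idea, assuming a simple priority rule: prefer a matching hand-written flow and fall back to neural generation otherwise (both stand-ins below are hypothetical, not the deployed system's logic):

```python
def respond(user_utt, handwritten_flows, neural_generate):
    """Toy hybrid policy: use a matching scaffolded, hand-written flow when
    one triggers; otherwise fall back to controlled neural generation."""
    for trigger, reply in handwritten_flows:
        if trigger in user_utt.lower():
            return reply
    return neural_generate(user_utt)

flows = [("movie", "I love movies! Seen anything good lately?")]
print(respond("Let's talk about movies", flows,
              lambda u: f"Interesting! Tell me more about: {u}"))
```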